Prevent duplicate auth token activity updates #29357
Merged
Conversation
ChristophWurst requested review from nickvergessen, icewind1991, blizzz, juliusknorr and CarlSchwan on October 21, 2021 12:20
ChristophWurst force-pushed the fix/concurrent-duplicate-auth-token-updates branch from 5c9ab5d to 238ac12 on October 21, 2021 13:11
nickvergessen approved these changes on Oct 22, 2021
The auth token activity logic works as follows:

* Read auth token
* Compare last activity time stamp to current time
* Update auth token activity if it's older than x seconds

This works fine in isolation, but with concurrency it means that occasionally the same token is read simultaneously by two processes and both of them trigger an update of the same row. Effectively the second update doesn't add much value: it might set the time stamp to the exact same value or to one a few seconds later. But last activity is not a precise science; we don't need this accuracy.

This patch changes the UPDATE query to include the expected value in a comparison with the current data. This results in an affected row when the data in the DB still has an old time stamp, but won't affect a row if the time stamp is (nearly) up to date.

This is a micro optimization and will possibly not show any significant performance improvement. Yet in setups with a DB cluster it means that the write node has to send fewer changes to the read nodes due to the lower number of actual changes.

Signed-off-by: Christoph Wurst <[email protected]>
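For illustration, here is a minimal sketch of the guarded UPDATE described above, assuming Nextcloud's `OCP\DB\QueryBuilder\IQueryBuilder` API. The table name, column names and the surrounding variables (`$connection`, `$token`, `$oldActivity`, `$now`) are placeholders for this example, not the exact code of the PR:

```php
use OCP\DB\QueryBuilder\IQueryBuilder;

// Sketch: only update the row if it still carries the activity value read earlier.
// A concurrent process that already refreshed the row turns this into a no-op.
$update = $connection->getQueryBuilder();
$update->update('authtoken')
    ->set('last_activity', $update->createNamedParameter($now, IQueryBuilder::PARAM_INT))
    ->where($update->expr()->eq('id', $update->createNamedParameter($token->getId(), IQueryBuilder::PARAM_INT)))
    // The extra condition: compare against the expected (previously read) value.
    ->andWhere($update->expr()->eq('last_activity', $update->createNamedParameter($oldActivity, IQueryBuilder::PARAM_INT)));

$affectedRows = $update->executeStatement();
// 0 affected rows means another process already bumped the timestamp, which is fine here.
```

The comparison could also be expressed as a cut-off on the stored value rather than strict equality; either way, the point is that an already up-to-date row produces zero affected rows.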
ChristophWurst force-pushed the fix/concurrent-duplicate-auth-token-updates branch from 238ac12 to 7dd7256 on October 22, 2021 07:32
skjnldsv approved these changes on Oct 22, 2021
/backport to stable22
/backport to stable21
Combined with #1037 this gives zero token updates. We did it @icewind1991! We did it! 😎
How to test

1. Assign the return value of `$update->executeStatement();` to a var. It contains the number of affected rows.
2. Set `last_activity` to 0 for the auth token that you want to test, e.g. the browser session token.
3. Send a few concurrent requests with that token. You should see that the number of affected rows is 1 for the first process but 0 for the others.
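A rough way to observe this locally, assuming you temporarily edit the code around the update; the logger lookup and message wording are just one convenient option, not part of the patch:

```php
// Capture the number of affected rows instead of discarding the return value.
$affectedRows = $update->executeStatement();

// Log it so the concurrent requests can be compared afterwards.
\OC::$server->get(\Psr\Log\LoggerInterface::class)
    ->debug('auth token activity update affected ' . $affectedRows . ' row(s)');
```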